

The Right to Be Forgotten in the AI Age

Published on May 3, 2025 by Arnav Gupta




From searching for a gift on Amazon to doomscrolling on social media, your private information is constantly being tracked, stored, and sold. Your buying habits, browsing history, home address, income, and even your connections on social media are collected—often without your explicit consent. Even if you delete an account or update your details, backups, archives, and third-party databases may still retain your data. Search engines index your online activity, AI models train on your content, and companies continue to profit from your information long after you’re gone. In today’s digital age, privacy—the fundamental right to control access to one’s personal information—is no longer enough. We need the right to be forgotten (RTBF)—a safeguard that allows individuals to reclaim control over their personal data and erase their digital footprint.

RTBF in Europe

The European Union’s General Data Protection Regulation (GDPR), specifically Article 17, already recognizes RTBF by giving individuals the right to request the deletion of their personal data when the data is no longer necessary for its original purpose, when consent is withdrawn, or when the data has been unlawfully processed. The landmark case that established RTBF in the EU was Google Spain v. Mario Costeja González (2014), in which González filed a complaint against Google Spain and the Spanish newspaper La Vanguardia seeking removal of search results linking to a 1998 article about a foreclosure notice issued against him. While the newspaper had lawfully published the article, González argued that the information was outdated, no longer relevant, and harmed his reputation.

The Court of Justice of the European Union ruled that search engines are data controllers under EU data protection laws and must comply with removal requests if personal data is outdated, irrelevant, or excessive. However, the ruling also established that RTBF is not absolute—search engines need not comply if the data is in the public interest or essential to the exercise of free expression, a clause vague enough to create uncertainty and room for interpretation. This tension between RTBF and freedom of speech continues to spark debate: Should individuals have the right to bury embarrassing or harmful truths? And who decides what is “in the public interest”?

The American Approach

Unlike the EU, the United States does not recognize the right to be forgotten. The U.S. places a stronger emphasis on free speech and public access to information, which can make it extremely difficult for individuals to remove outdated or unwanted content, even if it’s damaging or no longer relevant. While Americans can sometimes request a copy of the personal data a company holds on them, most notably under state laws such as California’s Consumer Privacy Act (CCPA), such access requests don’t automatically guarantee deletion or erasure. This fundamental gap in privacy protection leaves Americans particularly vulnerable in the digital age.

Implementation Challenges

Companies face significant challenges in implementing RTBF-like rights. Data is often fragmented across legacy systems, third-party platforms, and cloud storage, making it difficult to locate and erase comprehensively. Verifying the identity of the person requesting deletion—especially in a global digital economy—is another logistical and security hurdle.
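The fragmentation problem can be made concrete with a small sketch. This is a hypothetical erasure orchestrator, not any real company's system; the store names and data are invented for illustration. The point is that a single RTBF request must fan out across every copy of the data, and an audit trail is needed for stores where the record couldn't be found, which may mean it was never indexed at all:

```python
# Hypothetical sketch: a single erasure request must fan out across every
# system that may hold the subject's data. Store names are illustrative.
from dataclasses import dataclass, field

@dataclass
class ErasureReport:
    erased: list = field(default_factory=list)     # stores where the record was deleted
    not_found: list = field(default_factory=list)  # stores with no record -- a red flag
                                                   # if the data should have been there

def process_erasure(user_id: str, stores: dict) -> ErasureReport:
    """Attempt deletion in each store; record gaps for manual follow-up."""
    report = ErasureReport()
    for name, store in stores.items():
        try:
            store.pop(user_id)          # raises KeyError if the record is absent
            report.erased.append(name)
        except KeyError:
            report.not_found.append(name)
    return report

# Three fragmented copies of the same user's data; the legacy archive
# was never migrated, so the record is missing there.
stores = {
    "primary_db": {"u42": {"email": "a@example.com"}},
    "analytics_cache": {"u42": {"clicks": 17}},
    "legacy_archive": {},
}
report = process_erasure("u42", stores)
print(report.erased)     # ['primary_db', 'analytics_cache']
print(report.not_found)  # ['legacy_archive']
```

Even this toy version surfaces the core difficulty: a "not found" result is ambiguous. It may mean the data was already gone, or that it lives somewhere the orchestrator doesn't know about, such as a third-party processor or an offline backup.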

The overwhelming quantity of RTBF requests can also strain a company’s resources. According to Google’s RTBF transparency data, in the first five years of implementation Google received requests to delist 3.2 million URLs from roughly 502,000 requesters and approved around 45% of them. This volume demonstrates both the high demand for these protections and the selective nature of compliance.

Moreover, many companies rely heavily on user data for targeted advertising, personalization, and algorithmic optimization, creating a built-in disincentive to comply with deletion requests, even where they are legally mandated. Their business models fundamentally depend on the very data users might want removed.

The AI Complication

Looking ahead, the emergence of generative AI and large language models (LLMs) adds an entirely new layer of complexity to the enforcement of RTBF. Once personal data has been scraped and used to train an AI model, that data becomes embedded in the model’s parameters and outputs. It’s no longer stored in a traditional database that can be queried and erased—instead, it’s part of a probabilistic system capable of “hallucinating” or reproducing personal details without clear provenance. How do you erase something from a model that can’t even identify the source it learned from?
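The gap between deleting stored data and making a model "forget" it can be shown with a toy example. Here the "model" is nothing more than an average over made-up user records; no real system or dataset is implied:

```python
# Toy illustration: deleting a record from a database does not remove
# its influence from a model already trained on it. Values are invented.
records = {"alice": 30, "bob": 50, "carol": 40}

# "Training": the model here is just the mean of the values it saw.
model_mean = sum(records.values()) / len(records)  # 40.0

# An RTBF request deletes Bob's record from the primary store...
del records["bob"]

# ...but the trained artifact still encodes his data.
print(model_mean)                            # 40.0 -- unchanged by the deletion
print(sum(records.values()) / len(records))  # 35.0 -- only retraining "forgets"
```

A real LLM is this problem at scale: billions of parameters shaped by a training run that cannot be cheaply repeated, with no ledger mapping any parameter back to any person's data. That is what makes "machine unlearning" an open research problem rather than a routine deletion operation.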

This raises urgent questions: Should individuals have the right to request that AI models be retrained or altered if they include or reproduce personal data? How would such requests be verified and enforced across decentralized platforms and black-box algorithms? And can RTBF even be meaningfully applied in an era where information is not just stored, but absorbed into intelligent systems?

The Future of Digital Erasure

The right to be forgotten represents a crucial battleground in the ongoing struggle between individual privacy and corporate interests. As AI systems become increasingly sophisticated and ubiquitous, the challenges of implementing RTBF will only grow more complex. We need technical solutions that can accommodate deletion requests within AI systems, regulatory frameworks with real enforcement power, and perhaps most importantly, a shift in how we conceptualize digital identity and memory.

If privacy is to mean anything in the age of AI, we will need not only stronger regulatory frameworks but entirely new technical and ethical tools—ones that go beyond deletion, toward accountability, transparency, and truly user-centric data control. Our digital legacies deserve no less protection than our physical ones.